Addressing such diverse ends as safe alignment with human preferences and learning efficiency, a growing body of reinforcement learning research focuses on risk functionals that depend on the entire return distribution. Recent work on \emph{off-policy risk assessment} (OPRA) for contextual bandits introduced consistent estimators of a target policy's CDF of returns, with finite-sample guarantees that hold simultaneously over all risk functionals. In this paper, we lift OPRA to Markov decision processes (MDPs), where importance sampling (IS) CDF estimators suffer high variance on long trajectories due to small effective sample sizes. To mitigate these problems, we incorporate model-based estimation to develop the first doubly robust (DR) estimator of the CDF of returns in MDPs. This estimator enjoys significantly lower variance and, when the model is well specified, achieves the Cramér-Rao variance lower bound. Moreover, for many risk functionals, the downstream estimates enjoy both lower bias and lower variance. Additionally, we derive the first minimax lower bounds for off-policy CDF and risk estimation, which match our error bounds up to constant factors. Finally, we demonstrate the precision of our DR CDF estimates experimentally on several different environments.
Standard uniform convergence results bound the generalization gap of the expected loss over a hypothesis class. The emergence of risk-sensitive learning requires generalization guarantees for functionals of the loss distribution beyond the expected loss. While prior work specializes in uniform convergence of particular functionals, our work provides uniform convergence for a general class of H\"older risk functionals, for which closeness in the CDF entails closeness in risk. We establish the first uniform convergence results for estimating the CDF of the loss distribution, yielding guarantees that hold simultaneously both over all H\"older risk functionals and over all hypotheses. Thus licensed to perform empirical risk minimization, we develop practical gradient-based methods for minimizing distortion risks (a widely studied subset of H\"older risk functionals that subsumes the spectral risks, including the mean, conditional value at risk, cumulative prospect theory risks, and others) and provide convergence guarantees. In experiments, we demonstrate the efficacy of our learning procedure, both in settings where uniform convergence results hold and in high-dimensional settings with deep networks.
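To make the risk functionals concrete, here is a hedged numpy sketch of conditional value at risk and a generic spectral/distortion risk computed from sorted empirical losses. The discretization and weighting scheme are illustrative, not the paper's estimator or training procedure.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value at risk: mean of the worst (1 - alpha) tail of losses.

    One of the distortion risks covered by the Holder risk functional class."""
    q = np.quantile(losses, alpha)             # value at risk (VaR)
    return losses[losses >= q].mean()

def distortion_risk(losses, weights_fn):
    """Spectral/distortion risk as a weighted average of sorted losses.

    weights_fn maps quantile levels u in (0, 1) to nonnegative weights;
    uniform weights recover the mean, tail-concentrated weights give CVaR-like
    risks."""
    sorted_losses = np.sort(losses)
    n = len(losses)
    u = (np.arange(n) + 0.5) / n               # midpoint quantile levels
    w = weights_fn(u)
    w = w / w.sum()                            # normalize the discretization
    return (w * sorted_losses).sum()

losses = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
mean_risk = distortion_risk(losses, lambda u: np.ones_like(u))
```

Because both quantities are functionals of the empirical CDF, uniform convergence of the CDF immediately transfers to them, which is the mechanism the abstract describes.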
Hybrid human-ML systems are increasingly in charge of consequential decisions across a wide range of domains. A growing body of empirical and theoretical work has advanced our understanding of these systems. However, existing empirical results are mixed, and theoretical proposals are often mutually incompatible. In this work, we propose a unifying framework for understanding the conditions under which combining the complementary strengths of humans and ML leads to higher-quality decisions than either could produce alone, a phenomenon we term human-ML complementarity. We focus specifically on the context of human-ML predictive decision-making and investigate optimal ways of combining human and ML predictions, informed by an understanding of the underlying sources of variation in their judgments. Within this scope, we present two crucial contributions. First, taking a computational perspective of decision-making and drawing upon prior literature in psychology, machine learning, and human-computer interaction, we introduce a taxonomy characterizing a wide range of criteria across which human and machine decision-making differ. Second, formalizing our taxonomy allows us to study how human and ML predictions should be aggregated optimally. We show that our proposed framework encompasses several existing models of human-ML complementarity as special cases. Last but not least, an initial exploratory analysis of our framework offers a critical insight for future work on human-ML complementarity: the mechanism by which we combine human and ML judgments should be informed by the underlying causes of variation in their decisions.
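As one toy instance of aggregation informed by the sources of variation, consider the classical case where both the human and the ML model give unbiased predictions with independent noise: the variance-minimizing linear combination weights each by its inverse variance. This is a textbook special case for illustration only, not the paper's framework; the function name and inputs are hypothetical.

```python
def inverse_variance_combine(h_pred, m_pred, h_var, m_var):
    """Variance-minimizing linear combination of two unbiased predictions
    with independent noise: weight each by its inverse variance."""
    w_h = 1.0 / h_var
    w_m = 1.0 / m_var
    return (w_h * h_pred + w_m * m_pred) / (w_h + w_m)

# Equal noise: simple average. Noisier ML: lean toward the human.
equal = inverse_variance_combine(2.0, 4.0, 1.0, 1.0)      # 3.0
skewed = inverse_variance_combine(2.0, 4.0, 1.0, 3.0)     # 2.5
```

The point the abstract makes is precisely that the right combination rule depends on *why* the two judgments differ; when the noise assumptions change, so does the optimal aggregator.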
Perceived signals in the real world are usually high-dimensional and noisy, and finding and using representations of them that contain the essential and sufficient information required by downstream decision-making tasks helps improve computational efficiency and generalization ability in those tasks. In this paper, we focus on partially observable environments and propose to learn a minimal set of state representations that capture sufficient information for decision-making, termed \textit{Action-Sufficient state Representations} (ASRs). We build a generative environment model for the structural relationships among variables in the system and present a principled way to characterize ASRs, based on structural constraints and the goal of maximizing cumulative reward in policy learning. We then develop a structured sequential variational auto-encoder to estimate the environment model and extract ASRs. Our empirical results on CarRacing and VizDoom demonstrate a clear advantage of learning and using ASRs for policy learning. Moreover, the estimated environment model and ASRs allow learning behaviors from imagined outcomes in the compact latent space, improving sample efficiency.
The click-through rate (CTR) prediction task is to predict whether a user will click on the recommended item. As mind-boggling amounts of data are produced online daily, accelerating CTR prediction model training is critical to ensuring an up-to-date model and reducing the training cost. One approach to increasing the training speed is to apply large batch training. However, as shown in computer vision and natural language processing tasks, training with a large batch easily suffers from a loss of accuracy. Our experiments show that previous scaling rules fail in the training of CTR prediction neural networks. To tackle this problem, we first theoretically show that different frequencies of ids make it challenging to scale hyperparameters when scaling the batch size. To stabilize the training process in a large batch size setting, we develop adaptive Column-wise Clipping (CowClip). It enables an easy and effective scaling rule for the embeddings, which keeps the learning rate unchanged and scales the L2 loss. We conduct extensive experiments with four CTR prediction networks on two real-world datasets and successfully scale the batch size to 128 times the original without accuracy loss. In particular, for training the CTR prediction model DeepFM on the Criteo dataset, our optimization framework enlarges the batch size from 1K to 128K with over 0.1% AUC improvement and reduces training time from 12 hours to 10 minutes on a single V100 GPU. Our code is available at https://github.com/bytedance/LargeBatchCTR.
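The column-wise clipping idea can be sketched roughly as follows: each embedding row's gradient is clipped against a bound proportional to that row's weight norm. This is a simplified stand-in; the `ratio` and floor values are illustrative, and the full CowClip method additionally accounts for id frequencies and pairs the clipping with the keep-LR/scale-L2 rule described above.

```python
import numpy as np

def cowclip(grads, weights, ratio=1e-3, zero_thresh=1e-8):
    """Per-row adaptive clipping of embedding gradients (simplified sketch).

    Each row's gradient norm is capped at ratio * ||weight row||, with a
    small floor so freshly initialized (near-zero) rows can still move.
    """
    clipped = np.empty_like(grads)
    for i in range(grads.shape[0]):
        g_norm = np.linalg.norm(grads[i])
        bound = max(ratio * np.linalg.norm(weights[i]), zero_thresh)
        scale = min(1.0, bound / (g_norm + 1e-12))
        clipped[i] = grads[i] * scale
    return clipped

W = np.ones((2, 4))          # embedding table: each row has norm 2.0
G = np.ones((2, 4))          # raw gradients: each row has norm 2.0
out = cowclip(G, W)          # rows shrunk to norm <= 2e-3
```

Clipping per embedding row (rather than globally) keeps rarely seen ids, whose gradients are sparse and spiky, from destabilizing large-batch training.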
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
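A NAIVEATTACK-style injection can be sketched as below: stamp a small trigger patch onto a fraction of the synthetic images and relabel them with the attacker's target class. This is a toy illustration; the patch size, poison fraction, and placement are arbitrary choices, not the paper's settings, and DOORPING's iterative trigger updates during distillation are not shown.

```python
import numpy as np

def add_trigger(images, labels, target_label, patch=3, frac=0.1, seed=0):
    """Stamp a white square trigger on a random subset of (synthetic) images
    and flip their labels to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n = len(images)
    idx = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    for i in idx:
        images[i, -patch:, -patch:] = 1.0      # bottom-right white square
        labels[i] = target_label
    return images, labels, idx

imgs = np.zeros((20, 8, 8))                    # toy distilled image set
labs = np.zeros(20, dtype=int)
p_imgs, p_labs, idx = add_trigger(imgs, labs, target_label=7)
```

The key difference from classical backdoors is *where* this runs: on the distilled synthetic set before model training, so any model later trained on the set inherits the trigger.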
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instances share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
Benefiting from its capability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms prevent the performance of existing algorithms from improving further. 1) The quality of positive samples heavily depends on carefully designed data augmentations, while inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity of positive and negative samples respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.